bpf: improve the general precision of tnum_mul #9498
base: bpf-next_base
Conversation
Upstream branch: dc0fe95 | force-pushed 2530e45 to 61c9cef
Upstream branch: c80d797 | force-pushed ba77d27 to 24b0298, then 61c9cef to 715d6cb
Upstream branch: 3ec8560 | force-pushed 24b0298 to e85f4be, then 715d6cb to 506c27a
Upstream branch: 1274163 | force-pushed e85f4be to 9d1835a, then 506c27a to 76c716d
Upstream branch: d87fdb1 | commit message:
This commit addresses a challenge posed as an open question ("How can we incorporate correlation in unknown bits across partial products?") by Harishankar et al. in their paper: https://arxiv.org/abs/2105.05398

When LSB(a) is uncertain, we know for sure that it is either 0 or 1, so we can compute the two possible partial products and take their union. Experiments show that applying this technique in long multiplication improves precision in a significant number of cases, at the cost of losing precision in a comparatively small number of cases.

This commit also removes the value-mask decomposition technique employed by Harishankar et al., since incorporating it directly did not improve the new algorithm.

Signed-off-by: Nandakumar Edamana <[email protected]>
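A minimal userspace sketch of the idea, assuming the kernel's tnum representation (value/mask pairs) and its `tnum_add`/`tnum_lshift`/`tnum_rshift` helpers from kernel/bpf/tnum.c. The `tnum_union` helper and the exact loop structure below are written from this commit message's description and may differ from the patch as merged:

```c
#include <stdint.h>
#include <stdio.h>

struct tnum {
	uint64_t value;	/* bits known to be 1 */
	uint64_t mask;	/* bits whose value is unknown */
};

#define TNUM(_v, _m) ((struct tnum){ .value = (_v), .mask = (_m) })

/* Sum two tnums, propagating uncertainty through the carry chain
 * (mirrors the kernel's tnum_add). */
static struct tnum tnum_add(struct tnum a, struct tnum b)
{
	uint64_t sm = a.mask + b.mask;
	uint64_t sv = a.value + b.value;
	uint64_t sigma = sm + sv;
	uint64_t chi = sigma ^ sv;	/* positions where a carry may differ */
	uint64_t mu = chi | a.mask | b.mask;

	return TNUM(sv & ~mu, mu);
}

static struct tnum tnum_lshift(struct tnum a, uint8_t shift)
{
	return TNUM(a.value << shift, a.mask << shift);
}

static struct tnum tnum_rshift(struct tnum a, uint8_t shift)
{
	return TNUM(a.value >> shift, a.mask >> shift);
}

/* Smallest tnum containing every value of a and every value of b:
 * a bit stays known only if it is known and equal in both operands.
 * (Assumed helper, written from the commit message's description.) */
static struct tnum tnum_union(struct tnum a, struct tnum b)
{
	uint64_t mu = a.mask | b.mask | (a.value ^ b.value);

	return TNUM(a.value & ~mu, mu);
}

/* Long multiplication with the LSB case split: when the current LSB
 * of a is unknown, it is either 0 or 1, so the accumulator becomes
 * the union of acc (LSB == 0) and acc + b (LSB == 1). */
static struct tnum tnum_mul(struct tnum a, struct tnum b)
{
	struct tnum acc = TNUM(0, 0);

	while (a.value || a.mask) {
		if (a.value & 1)		/* LSB certainly 1 */
			acc = tnum_add(acc, b);
		else if (a.mask & 1)		/* LSB unknown: 0 or 1 */
			acc = tnum_union(acc, tnum_add(acc, b));
		a = tnum_rshift(a, 1);
		b = tnum_lshift(b, 1);
	}
	return acc;
}

int main(void)
{
	/* a = 0b1x, i.e. {2, 3}; b = 3: the true products are 6 and 9 */
	struct tnum p = tnum_mul(TNUM(2, 1), TNUM(3, 0));

	printf("value=%#llx mask=%#llx\n",
	       (unsigned long long)p.value, (unsigned long long)p.mask);
	return 0;
}
```

For this example the sketch yields (value=0, mask=0xf), i.e. {0..15}: sound, since it contains both 6 and 9; the precision gain of the case split shows up in aggregate over many operand pairs rather than on any single one.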
force-pushed 9d1835a to 2d2a446
Pull request for series with
subject: bpf: improve the general precision of tnum_mul
version: 2
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=991951